-
Wearable sensors, stretchable electronics, and many soft robotic materials require a balance of conductivity, stretchability, and robustness. Intrinsically conductive polymers offer a critical step toward improving wearable sensor materials due to their tunable conductivity, soft/compliant nature, and ability to complex with other coactive molecules (e.g., polyacids and small molecules). The addition of synergistic nanofillers has been shown to enhance the conductivity, self-healing, and mechanical properties of these polymers for soft robotics and wearable applications. The development of a robust polymer nanocomposite material that offers ultra-stretchability, autonomous self-healing, and enhanced electronic properties has long eluded researchers. Herein, we report an aqueous polyaniline [PANI]:poly(2-acrylamido-2-methylpropane sulfonic acid) [PAAMPSA]:phytic acid [PA] polymer complex synthesized with 0.5 wt % silver nanowires (AgNWs) to form a polymer nanocomposite with high electronic sensitivity, unique mechanical properties (a maximum strain of 4693%), and repeatable, autonomous self-healing efficiencies greater than 98%. This AgNW polymer complex exhibits an engineering strain higher than that of any reported hydrogel or other polymer-based sensor material; the interface between the polymer matrix and the AgNWs is hypothesized to be integral to the formation of the active electrically conductive network and the unprecedented mechanical properties. To illustrate its remarkable sensitivity, the material was employed as a biomedical sensor (pulse, voice recognition, motion), a topographical sensor, and a high-sensitivity strain gauge.
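For readers unfamiliar with the headline figures, a minimal sketch of the standard definitions behind them follows; the gauge lengths and the strain-based healing metric below are illustrative assumptions for exposition, not values or definitions taken from the paper:

    # Minimal sketch: standard definitions behind the reported figures.
    # Gauge lengths are illustrative placeholders, not data from the paper.

    def engineering_strain_pct(l_final_mm: float, l_initial_mm: float) -> float:
        """Engineering strain in percent: 100 * (L - L0) / L0."""
        return 100.0 * (l_final_mm - l_initial_mm) / l_initial_mm

    def healing_efficiency_pct(healed_value: float, pristine_value: float) -> float:
        """Self-healing efficiency: recovered property relative to the pristine sample."""
        return 100.0 * healed_value / pristine_value

    # A hypothetical 10 mm gauge stretched to ~479.3 mm matches the reported 4693% strain.
    print(engineering_strain_pct(479.3, 10.0))      # ~4693.0
    # A healed sample recovering 4600% of a 4693% pristine strain gives ~98% efficiency.
    print(healing_efficiency_pct(4600.0, 4693.0))   # ~98.0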
-
Kernel survival analysis models estimate individual survival distributions with the help of a kernel function, which measures the similarity between any two data points. Such a kernel function can be learned using deep kernel survival models. In this paper, we present a new deep kernel survival model called a survival kernet, which scales to large datasets in a manner that is amenable to model interpretation and also to theoretical analysis. Specifically, the training data are partitioned into clusters based on a recently developed training set compression scheme for classification and regression called kernel netting that we extend to the survival analysis setting. At test time, each data point is represented as a weighted combination of these clusters, and each such cluster can be visualized. For a special case of survival kernets, we establish a finite-sample error bound on predicted survival distributions that is, up to a log factor, optimal. Whereas scalability at test time is achieved using the aforementioned kernel netting compression strategy, scalability during training is achieved by a warm-start procedure based on tree ensembles such as XGBoost and a heuristic approach to accelerating neural architecture search. On four standard survival analysis datasets of varying sizes (up to roughly 3 million data points), we show that survival kernets are highly competitive with the various baselines tested in terms of time-dependent concordance index. Our code is available at: https://github.com/georgehc/survival-kernets
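To make the test-time prediction concrete, here is a minimal sketch of how a survival curve might be formed as a kernel-weighted combination of cluster-level event counts. The Gaussian kernel, the bandwidth, and the cluster summary format are our assumptions for illustration, not the authors' implementation:

    import numpy as np

    def survival_kernet_predict(x_embed, cluster_embeds, deaths, at_risk, bandwidth=1.0):
        """Sketch: kernel-weighted Kaplan-Meier-style estimate over cluster summaries.

        cluster_embeds: (K, d) cluster representatives in the learned embedding space
        deaths:         (K, T) observed deaths per cluster on a shared time grid
        at_risk:        (K, T) number at risk per cluster on the same grid
        Returns the predicted survival curve S(t) of length T.
        """
        # Kernel weight of the test embedding on each cluster (Gaussian kernel assumed).
        sq_dists = np.sum((cluster_embeds - x_embed) ** 2, axis=1)
        w = np.exp(-sq_dists / (2.0 * bandwidth ** 2))          # shape (K,)
        # Pool weighted death and at-risk counts across clusters.
        d_t = w @ deaths                                        # shape (T,)
        n_t = w @ at_risk                                       # shape (T,)
        hazard = np.where(n_t > 0, d_t / np.maximum(n_t, 1e-12), 0.0)
        return np.cumprod(1.0 - hazard)                         # product-limit form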
-
We consider the problem of predicting how the likelihood of an outcome of interest for a patient changes over time as we observe more of the patient's data. To solve this problem, we propose a supervised contrastive learning framework that learns an embedding representation for each time step of a patient time series. Our framework learns the embedding space to have the following properties: (1) nearby points in the embedding space have similar predicted class probabilities, (2) adjacent time steps of the same time series map to nearby points in the embedding space, and (3) time steps with very different raw feature vectors map to far-apart regions of the embedding space. To achieve property (3), we employ a nearest neighbor pairing mechanism in the raw feature space. This mechanism also serves as an alternative to "data augmentation", a key ingredient of contrastive learning that, to our knowledge, lacks a standard procedure adequately realistic for clinical tabular data. We demonstrate that our approach outperforms state-of-the-art baselines in predicting mortality of septic patients (MIMIC-III dataset) and tracking the progression of cognitive impairment (ADNI dataset). Our method also consistently recovers the correct synthetic dataset embedding structure across experiments, a feat not achieved by baselines. Our ablation experiments show the pivotal role of our nearest neighbor pairing mechanism.
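As a rough illustration of the nearest neighbor pairing mechanism, the sketch below builds positive pairs from nearest neighbors in the raw feature space; the brute-force distance computation and the parameter k are our assumptions for exposition, not the paper's implementation:

    import numpy as np

    def nearest_neighbor_pairs(raw_features, k=1):
        """Sketch: pair each time step with its k nearest neighbors in raw feature space.

        raw_features: (N, d) raw feature vectors pooled across patient time series.
        Returns a list of (anchor_index, positive_index) pairs for contrastive training.
        """
        # Brute-force pairwise squared Euclidean distances (use a KD-tree at scale).
        diffs = raw_features[:, None, :] - raw_features[None, :, :]
        sq_dists = (diffs ** 2).sum(axis=-1)
        np.fill_diagonal(sq_dists, np.inf)   # a point is never its own neighbor
        pairs = []
        for i in range(len(raw_features)):
            for j in np.argsort(sq_dists[i])[:k]:
                pairs.append((i, int(j)))
        return pairs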